With the growth of editing and sharing images over the internet, the importance of protecting image authorship has increased. Robust watermarking is a well-known approach to copyright protection. Robustness and imperceptibility are the two factors a watermarking scheme tries to maximize, and there is usually a trade-off between them: increasing robustness reduces the imperceptibility of the watermark. This paper proposes an adaptive method that determines the strength of watermark embedding in different parts of the cover image according to its texture and brightness. Adaptive embedding increases robustness while preserving the quality of the watermarked image. Experimental results also show that the proposed method can effectively reconstruct the embedded payload under various common watermarking attacks. Our proposed method shows good performance compared to a recent technique.
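The abstract does not give the embedding formula, but the core idea, scaling watermark strength by local texture and brightness, can be sketched as follows. The block size, the variance-based texture measure, and the `base_alpha` scaling are illustrative assumptions, not the authors' actual scheme:

```python
import numpy as np

def embedding_strength(cover, base_alpha=0.05, block=8):
    """Per-block watermark strength scaled by local texture (variance)
    and brightness (mean intensity): busy or bright regions tolerate a
    stronger watermark, flat dark regions get a weaker one."""
    h, w = cover.shape
    alpha = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            blk = cover[i * block:(i + 1) * block,
                        j * block:(j + 1) * block].astype(float)
            texture = blk.var() / 255.0 ** 2      # normalized to roughly 0..1
            brightness = blk.mean() / 255.0       # normalized to 0..1
            alpha[i, j] = base_alpha * (1.0 + texture + brightness)
    return alpha
```

A full scheme would then embed the payload block by block, modulating each block's coefficients by its entry in `alpha`.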
Image inpainting is an effective method for restoring damaged digital images. Traditional methods use the information of neighboring pixels to predict the values of missing pixels. Recently, deep neural networks have been used to learn the structural and semantic details of images for inpainting purposes. In this paper, we propose a network for image inpainting. This network, similar to U-Net, extracts various features from the image, leading to better results. We improve the final result by replacing the damaged pixels with the restored pixels of the output image. Our experimental results show that the proposed method produces high-quality results compared with traditional methods.
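The final compositing step described in the abstract, keeping intact pixels from the input and taking the network's prediction only at damaged locations, is a simple masked blend. A minimal sketch (function and argument names are ours, assuming a binary damage mask):

```python
import numpy as np

def composite_inpainting(damaged, network_output, mask):
    """Replace only the damaged pixels (mask == 1) with the network's
    restored pixels; everywhere else, keep the original image."""
    mask = np.asarray(mask).astype(bool)
    return np.where(mask, network_output, damaged)
```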
The data used to train deep neural network (DNN) models in applications such as healthcare and finance typically contain sensitive information. A DNN model may suffer from overfitting, and overfitted models have been shown to be susceptible to query-based attacks such as membership inference attacks (MIAs), which aim to determine whether a sample belongs to the dataset used to train a classifier (a member) or not (a nonmember). Recently, a new class of label-based MIAs (LAB MIAs) was proposed, in which an adversary only requires knowledge of the predicted labels of samples. Developing a defense against an adversary carrying out a LAB MIA on DNN models that cannot be retrained remains an open problem. We present LDL, a lightweight defense against LAB MIAs. LDL works by constructing a high-dimensional sphere around queried samples such that the model decision is unchanged for (noisy) variants of the sample within the sphere. This sphere of label invariance creates ambiguity and prevents a querying adversary from correctly determining whether a sample is a member or a nonmember. We analytically characterize the success rate of an adversary carrying out a LAB MIA when LDL is deployed, and show that the formulation is consistent with experimental observations. We evaluate LDL on seven datasets -- CIFAR-10, CIFAR-100, GTSRB, Face, Purchase, Location, and Texas -- with varying sizes of training data, all of which have been used by SOTA LAB MIAs. Our experiments demonstrate that LDL reduces the success rate of an adversary carrying out a LAB MIA in each case. We empirically compare LDL with defenses against LAB MIAs that require retraining of DNN models, and show that LDL performs favorably despite not needing to retrain the DNNs.
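The sphere of label invariance can be illustrated with a randomized-smoothing-style wrapper: the served prediction is the majority label of the model over noisy variants drawn inside a ball around the query, so every point near a queried sample maps to the same label. This is our own sketch of the idea, not the authors' exact construction; `radius` and `n_samples` are assumed parameters:

```python
import numpy as np

def ldl_style_predict(model_fn, x, radius=0.1, n_samples=32, rng=None):
    """Label-invariant serving: aggregate model_fn's decisions over
    noisy variants of x on an L2 sphere of the given radius and return
    the majority label, blurring the fine-grained decision-boundary
    cues a label-only membership inference attack relies on."""
    rng = rng or np.random.default_rng(0)
    votes = {}
    for _ in range(n_samples):
        noise = rng.normal(size=x.shape)
        noise *= radius / (np.linalg.norm(noise) + 1e-12)  # project onto the sphere
        label = model_fn(x + noise)
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Because the wrapper only needs black-box calls to `model_fn`, it fits the paper's setting of defending a model that cannot be retrained.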
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
Augmented reality (AR) games are a rich environment for studying and testing computational systems that provide subtle user guidance and training. In particular, computer systems that aim to augment a user's situational awareness benefit from the sensors and computing power available in AR headsets. In this work-in-progress paper, we present a new environment for studying situational awareness and attention guidance (SAAG): an augmented reality version of the board game Carcassonne. We also present our initial work toward producing the SAAG pipeline, including the creation of game state encodings, the development and training of a gameplay AI, and the design of situation modeling and gaze tracking systems.
Machine learning models in the wild have been shown to be vulnerable to Trojan attacks during training. Although many detection mechanisms have been proposed, strong adaptive attackers have been shown to be effective against them. In this paper, considering an intelligent and adaptive adversary, we aim to answer: (i) What is the minimal number of instances a strong attacker needs to Trojan a model? (ii) Is it possible for such an attacker to bypass strong detection mechanisms? We provide an analytical characterization of the adversary's capability and of the strategic interaction between the adversary and the detection mechanism in this setting. We characterize the adversary's capability in terms of the fraction of the input dataset that can be embedded with a Trojan trigger. We show that the loss function has a submodular structure, which leads to the design of computationally efficient algorithms that determine this fraction with provable bounds on optimality. We propose a Submodular Trojan algorithm to determine the minimal fraction of samples into which a Trojan trigger must be injected. To evade detection of the Trojaned model, we model the strategic interaction between the adversary and the Trojan detection mechanism as a two-player game. We show that the adversary wins the game with probability one, thereby bypassing detection. We establish this by proving that the output probability distributions of a Trojan model and a clean model are identical when the Min-Max (MM) Trojan algorithm is followed. We perform extensive evaluations on the MNIST, CIFAR-10, and EuroSAT datasets. The results show that (i) with the Submodular Trojan algorithm, the adversary needs to embed the Trojan trigger into only a very small fraction of samples to achieve high accuracy on both Trojan and clean samples, and (ii) the MM Trojan algorithm yields a trained Trojan model that evades detection with probability 1.
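The submodular structure matters because monotone submodular objectives admit a simple greedy selection with a (1 - 1/e) approximation guarantee. The sketch below shows that generic greedy pattern for picking which samples to poison under a budget; it illustrates the class of algorithm the abstract refers to, not the paper's actual Submodular Trojan procedure, and `objective_fn` is an assumed black-box set function:

```python
import numpy as np

def greedy_subset(objective_fn, candidates, budget):
    """Greedy maximization of a monotone submodular set function:
    repeatedly add the candidate with the largest marginal gain.
    For such objectives this achieves a (1 - 1/e) approximation
    to the optimal budget-constrained subset."""
    chosen = []
    remaining = list(candidates)
    for _ in range(min(budget, len(remaining))):
        gains = [objective_fn(chosen + [c]) - objective_fn(chosen)
                 for c in remaining]
        best = int(np.argmax(gains))
        chosen.append(remaining.pop(best))
    return chosen
```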